Search for: All records

Creators/Authors contains: "Zhang, Fan"

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available free of charge during the embargo (administrative interval).

Some links on this page may take you to non-federal websites. Their policies may differ from those of this site.

  1. Free, publicly-accessible full text available July 1, 2026
  2. Abstract: AB-stacked bilayer graphene has emerged as a fascinating yet simple platform for exploring macroscopic quantum phenomena of correlated electrons. Under large electric displacement fields and near low-density van Hove singularities, it exhibits a phase with features consistent with Wigner crystallization, including negative dR/dT and nonlinear bias behavior. However, direct evidence for the emergence of an electron crystal at zero magnetic field remains elusive. Here, we explore low-frequency noise consistent with depinning and sliding of a Wigner crystal or solid. At large magnetic fields, we observe enhanced noise at low bias current and a frequency-dependent response characteristic of depinning and sliding, consistent with earlier scanning tunnelling microscopy studies confirming Wigner crystallization in the fractional quantum Hall regime. At zero magnetic field, we detect pronounced AC noise whose peak frequency increases linearly with the applied DC current, indicative of collective electron motion (a standard washboard-frequency estimate for this linear scaling is sketched after this list). These transport signatures pave the way toward confirming an anomalous Hall crystal.
  3. Inspired by the success of Self-Supervised Learning (SSL) in learning visual representations from unlabeled data, a few recent works have studied SSL in the context of Continual Learning (CL), where multiple tasks are learned sequentially, giving rise to a new paradigm, namely Self-Supervised Continual Learning (SSCL). It has been shown that SSCL outperforms Supervised Continual Learning (SCL), as the learned representations are more informative and robust to catastrophic forgetting. However, building upon the training process of SSL, prior SSCL studies involve training all the parameters for each task, resulting in prohibitively high training costs. In this work, we first analyze the training time and memory consumption and reveal that the backward gradient calculation is the bottleneck. Moreover, by investigating the task correlations in SSCL, we further discover an interesting phenomenon: with the SSL-learned backbone model, the intermediate features are highly correlated between tasks. Based on these new findings, we propose a new SSCL method with layer-wise freezing, which progressively freezes the partial layers with the highest correlation ratios for each task to improve training computation and memory efficiency (a code sketch of this freezing scheme follows this list). Extensive experiments across multiple datasets are performed, where our proposed method shows superior performance against the SoTA SSCL methods under various SSL frameworks. For example, compared to LUMP, our method achieves 1.18x, 1.15x, and 1.2x GPU training time reduction, 1.65x, 1.61x, and 1.6x memory reduction, 1.46x, 1.44x, and 1.46x backward FLOPs reduction, and 1.31%/1.98%/1.21% forgetting reduction without accuracy degradation on three datasets, respectively.
    Free, publicly-accessible full text available April 23, 2026
  4. Free, publicly-accessible full text available April 7, 2026
  5. Free, publicly-accessible full text available April 1, 2026
  6. Abstract: Spin-orbit coupling (SOC) and electron-electron interaction can mutually influence each other and give rise to a plethora of intriguing phenomena in condensed matter systems. In pristine bilayer graphene (BLG), which has weak SOC, intrinsic Lifshitz transitions and concomitant van Hove singularities lead to the emergence of many-body correlated phases. Layer-selective SOC can be proximity-induced by adding a layer of tungsten diselenide (WSe2) on one side. By applying an electric displacement field, the system can be tuned across a spectrum wherein electronic correlation, SOC, or a combination of both dominates. Our investigations reveal an intricate phase diagram of proximity-induced, SOC-selective BLG. Not only does this phase diagram include the correlated phases reminiscent of SOC-free doped BLG, but it also hosts unique SOC-induced states, which allow a compelling measurement of the valley g-factor, as well as a correlated insulator at charge neutrality, thereby showcasing the remarkable tunability of the interplay between interaction and SOC in WSe2-enriched BLG.
    Free, publicly-accessible full text available May 28, 2026
  7. Nowadays, parameter-efficient fine-tuning (PEFT) of large pre-trained models (LPMs) for downstream tasks has gained significant popularity, since it can significantly reduce the training computational overhead. The representative work, LoRA [1], learns a low-rank adaptor for a new downstream task rather than fine-tuning the whole backbone model. However, for inference, the large size of the learned model remains unchanged, leading to inefficient inference computation. To mitigate this, in this work, we are the first to propose a learning-to-prune methodology specifically designed for fine-tuning downstream tasks on LPMs with low-rank adaptation. Unlike prior low-rank adaptation approaches that only learn the low-rank adaptors for downstream tasks, our method further leverages the Gumbel-Sigmoid trick to learn a set of trainable binary channel-wise masks that automatically prune the backbone LPMs (a code sketch of this masking scheme follows this list). Therefore, our method retains the benefits of low-rank adaptation for reducing the number of trainable parameters while also producing a smaller pruned backbone LPM for efficient inference computation. Extensive experiments show that the Pruned-RoBERTa-base model with our method achieves an average channel-wise structured pruning ratio of 24.5% across the popular GLUE Benchmark, coupled with an average 18% inference-time speed-up on a real NVIDIA A5000 GPU. The Pruned-DistilBERT model shows an average 13% inference-time improvement with 17% sparsity. The Pruned-LLaMA-7B model achieves up to 18.2% inference-time improvement with 24.5% sparsity, demonstrating the effectiveness of our learnable pruning approach across different models and tasks.
    Free, publicly-accessible full text available January 20, 2026
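
For the zero-field noise signature in item 2 (a noise peak frequency that grows linearly with the applied DC current), such linear scaling is conventionally interpreted through the washboard picture of a sliding, periodically ordered electron lattice. The estimate below is a standard textbook relation offered only as context, not a result quoted from the record; the sheet carrier density n_s, channel width W, electron charge e, and crystal lattice constant a are generic symbols in this sketch, not quantities reported in the abstract.

    % Washboard-frequency estimate for a sliding electron crystal (contextual
    % sketch; not taken from the record). A lattice drifting at velocity v past
    % a pinning potential of period a produces noise at f = v / a.
    \[
      v_{\mathrm{drift}} = \frac{I}{n_s\, e\, W},
      \qquad
      f_{\mathrm{washboard}} = \frac{v_{\mathrm{drift}}}{a} = \frac{I}{n_s\, e\, W\, a},
    \]
    % so the noise peak frequency is linear in the DC current I, with slope 1/(n_s e W a).

In this picture, the slope of the measured frequency-versus-current line would give an estimate of the crystal's lattice period once the carrier density and channel width are known.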
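
The sketch below illustrates the kind of correlation-guided, layer-wise freezing described in item 3. It is a minimal PyTorch illustration under stated assumptions, not the authors' implementation: the backbone is assumed to be an nn.Sequential stack, the per-layer correlation metric (cosine similarity of mean intermediate features between two tasks' batches) and the freezing ratio are illustrative choices, and the function names are hypothetical.

    # Minimal sketch of correlation-guided layer freezing (illustrative only).
    # Assumes an nn.Sequential backbone; uses cosine similarity of mean
    # intermediate features as the cross-task correlation score.
    import torch
    import torch.nn as nn

    @torch.no_grad()
    def layer_correlations(backbone: nn.Sequential, x_prev: torch.Tensor, x_curr: torch.Tensor):
        """Per-layer correlation between intermediate features of two tasks' batches."""
        scores = []
        h_prev, h_curr = x_prev, x_curr
        for layer in backbone:
            h_prev, h_curr = layer(h_prev), layer(h_curr)
            f_prev = h_prev.flatten(1).mean(dim=0)   # mean feature vector, previous task
            f_curr = h_curr.flatten(1).mean(dim=0)   # mean feature vector, current task
            scores.append(torch.cosine_similarity(f_prev, f_curr, dim=0).item())
        return scores

    def freeze_highest_correlated(backbone: nn.Sequential, scores, freeze_ratio: float = 0.5):
        """Freeze the layers with the highest cross-task correlation; keep the rest trainable."""
        k = int(len(scores) * freeze_ratio)
        frozen = set(sorted(range(len(scores)), key=lambda i: -scores[i])[:k])
        for i, layer in enumerate(backbone):
            for p in layer.parameters():
                p.requires_grad_(i not in frozen)    # frozen layers skip backward updates

At each task boundary, the scores would be recomputed and the freezing mask updated; skipping gradient computation and optimizer state for frozen layers is what would produce the backward-FLOP and memory savings the abstract reports.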
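
The sketch below illustrates the combination described in item 7: a frozen backbone layer, a trainable low-rank (LoRA-style) adaptor, and a trainable binary channel-wise mask relaxed with a Gumbel-Sigmoid (binary-concrete) estimator. It is a minimal PyTorch illustration under stated assumptions, not the paper's implementation; the temperature, the straight-through thresholding, the zero initialization of the mask logits, and the MaskedLoRALinear wrapper name are all illustrative choices.

    # Minimal sketch of LoRA fine-tuning with a learnable channel-wise pruning
    # mask trained via a Gumbel-Sigmoid relaxation (illustrative only). The
    # backbone linear layer stays frozen; only the adaptor and mask logits train.
    import torch
    import torch.nn as nn

    def gumbel_sigmoid(logits: torch.Tensor, tau: float = 1.0, hard: bool = True) -> torch.Tensor:
        """Binary-concrete relaxation of a Bernoulli mask; straight-through when hard=True."""
        u = torch.rand_like(logits).clamp_(1e-6, 1 - 1e-6)
        noise = torch.log(u) - torch.log1p(-u)           # Logistic(0, 1) noise
        y_soft = torch.sigmoid((logits + noise) / tau)
        if not hard:
            return y_soft
        y_hard = (y_soft > 0.5).float()
        return y_hard + y_soft - y_soft.detach()          # forward: hard mask, backward: soft gradient

    class MaskedLoRALinear(nn.Module):
        """Frozen base nn.Linear + trainable low-rank adaptor + trainable output-channel mask."""
        def __init__(self, base: nn.Linear, rank: int = 8):
            super().__init__()
            self.base = base
            for p in self.base.parameters():
                p.requires_grad_(False)                   # backbone weights stay frozen
            self.lora_a = nn.Linear(base.in_features, rank, bias=False)
            self.lora_b = nn.Linear(rank, base.out_features, bias=False)
            nn.init.zeros_(self.lora_b.weight)            # adaptor starts as a no-op
            self.mask_logits = nn.Parameter(torch.zeros(base.out_features))

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            mask = gumbel_sigmoid(self.mask_logits)       # (near-)binary per-channel mask
            return (self.base(x) + self.lora_b(self.lora_a(x))) * mask

After training, output channels whose mask settles at zero could be physically removed from the base weight matrix (structured pruning), which is where the reported sparsity ratios and inference-time speed-ups on real GPUs would come from.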